PII: S0893-6080(99)00056-8
Authors
Abstract
We suggest that any brain-like (artificial neural network based) learning system will need a sleep-like mechanism for consolidating newly learned information if it is to cope with the sequential/ongoing learning of significantly new information. We summarise and explore two possible candidates for a computational account of this consolidation process in Hopfield-type networks. The “pseudorehearsal” method is based on the relearning of randomly selected attractors in the network as the new information is added from some second system. This process is supposed to reinforce old information within the network and protect it from the disruption caused by learning new inputs. The “unlearning” method is based on the unlearning of randomly selected attractors in the network after new information has already been learned. This process is supposed to locate and remove the unwanted associations between information that obscure the learned inputs. We suggest that as a computational model of sleep consolidation, the pseudorehearsal approach is better supported by the psychological, evolutionary, and neurophysiological data (in particular accounting for the role of the hippocampus in consolidation). © 1999 Elsevier Science Ltd. All rights reserved.
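The two consolidation mechanisms the abstract contrasts can be sketched in a small Hopfield network: pseudorehearsal relearns randomly sampled attractors ("pseudoitems") alongside the new item, while unlearning weakly subtracts randomly sampled attractors after learning. The sketch below is illustrative only, assuming a Hebbian-trained ±1 network; the network size, learning rates, and function names are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32  # hypothetical network size

def hebb(patterns):
    """Hebbian weight matrix storing a list of +/-1 patterns."""
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / N

def settle(W, s, sweeps=20):
    """Asynchronous updates: let the state fall into an attractor."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def pseudorehearsal(W, new_pattern, n_pseudo=8, lr=0.1):
    """Relearn randomly sampled attractors (pseudoitems) together with the
    new item, so old information is reinforced as the new item is added."""
    pseudoitems = [settle(W, rng.choice([-1, 1], N)) for _ in range(n_pseudo)]
    for p in pseudoitems + [new_pattern]:
        W = W + lr * np.outer(p, p) / N
    np.fill_diagonal(W, 0)
    return W

def unlearn(W, n_samples=8, eps=0.01):
    """After learning, weakly unlearn randomly sampled attractors, aiming to
    remove spurious blends that obscure the stored items."""
    for _ in range(n_samples):
        a = settle(W, rng.choice([-1, 1], N))
        W = W - eps * np.outer(a, a) / N
    np.fill_diagonal(W, 0)
    return W
```

The structural difference the paper turns on is visible here: pseudorehearsal intervenes *during* new learning (rehearsing attractors as a stand-in for old items held in a second system), whereas unlearning intervenes *after* learning and only ever subtracts.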
Similar resources
The consolidation of learning during sleep: comparing the pseudorehearsal and unlearning accounts
Best approximation by Heaviside perceptron networks
In Lp-spaces with p an integer from [1, infinity) there exists a best approximation mapping to the set of functions computable by Heaviside perceptron networks with n hidden units; however for p an integer from (1, infinity) such best approximation is not unique and cannot be continuous.
PII: S0893-6080(99)00042-8
This paper presents a theoretical analysis of the asymptotic memory capacity of the generalized Hopfield network. The perceptron learning scheme is proposed to store sample patterns as the stable states in a generalized Hopfield network. We have obtained that n − 1 and 2n are a lower and an upper bound of the asymptotic memory capacity of the network of n neurons, respectively, which shows th...
Control of exploitation-exploration meta-parameter in reinforcement learning
In reinforcement learning (RL), the duality between exploitation and exploration has long been an important issue. This paper presents a new method that controls the balance between exploitation and exploration. Our learning scheme is based on model-based RL, in which the Bayes inference with forgetting effect estimates the state-transition probability of the environment. The balance parameter,...
PII: S0893-6080(99)00058-1
The aim of the paper is to investigate the application of control schemes based on “internal models” to the stabilization of the standing posture. The computational complexities of the control problems are analyzed, showing that muscle stiffness alone is insufficient to carry out the task. The paper also re-visits the concept of the cerebellum as a Smith’s predictor. © 1999 Elsevier Science Ltd...
Publication date: 1999